On Nonconvex Decentralized Gradient Descent

Authors

  • Jinshan Zeng
  • Wotao Yin
Abstract

Consensus optimization has received considerable attention in recent years. A number of decentralized algorithms have been proposed for convex consensus optimization. However, on consensus optimization with nonconvex objective functions, our understanding of the behavior of these algorithms is limited. When we lose convexity, we can no longer hope to obtain globally optimal solutions (though we still do sometimes). Somewhat surprisingly, the decentralized consensus algorithms DGD and Prox-DGD retain most other properties from the convex setting, such as convergence to a consensus stationary solution (or a neighborhood of one) and the corresponding convergence rates when diminishing (respectively, constant) step sizes are used. It is worth noting that the Prox-DGD algorithm can handle nonconvex nonsmooth functions, provided that their proximal operators can be computed. To establish some of these properties, the existing proofs from the convex setting need changes, and some results require a completely different line of analysis.
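
To make the setting concrete, here is a minimal sketch of the DGD iteration in Python. It assumes a synchronous network with a symmetric, doubly stochastic mixing matrix W; the function names and the toy ring example are illustrative, not taken from the paper.

```python
import numpy as np

def dgd(grad_fns, W, x0, step=0.05, iters=1000):
    """Decentralized gradient descent (DGD) sketch.

    grad_fns : list of per-agent gradients, grad_fns[i](x) = grad f_i(x)
    W        : (n, n) doubly stochastic mixing matrix; W[i, j] > 0 only
               if agents i and j are neighbors (or i == j)
    x0       : (n, d) array of local iterates, one row per agent
    """
    x = x0.copy()
    for _ in range(iters):
        # each agent averages its neighbors' iterates, then takes a local
        # gradient step:  x_i <- sum_j W_ij x_j - step * grad f_i(x_i)
        mixed = W @ x
        grads = np.stack([g(x[i]) for i, g in enumerate(grad_fns)])
        x = mixed - step * grads
    return x

# toy example: 4 agents on a ring, each with a local quadratic loss
n, d = 4, 3
rng = np.random.default_rng(0)
targets = rng.normal(size=(n, d))
grad_fns = [lambda x, t=t: x - t for t in targets]   # grad of 0.5 * ||x - t||^2
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
x = dgd(grad_fns, W, np.zeros((n, d)))
```

With a constant step size the local iterates only reach a neighborhood of a consensus stationary point, consistent with the constant-step-size results summarized in the abstract; a diminishing step size removes that residual error.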


Related articles

A Proximal Gradient Algorithm for Decentralized Composite Optimization over Directed Networks

This paper proposes a fast decentralized algorithm for solving a consensus optimization problem defined in a directed networked multi-agent system, where the local objective functions have the smooth+nonsmooth composite form, and are possibly nonconvex. Examples of such problems include decentralized compressed sensing and constrained quadratic programming problems, as well as many decentralize...
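
The smooth+nonsmooth composite structure mentioned above lends itself to a proximal variant of the mixing-plus-gradient step. The sketch below (a Prox-DGD-style update with an l1 regularizer and a doubly stochastic, undirected mixing matrix) is only illustrative: the paper itself targets directed networks, whose mixing scheme is not reproduced here.

```python
import numpy as np

def prox_l1(v, thresh):
    """Proximal operator of thresh * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

def prox_dgd_step(x, W, grad_fns, step, l1_weight):
    """One synchronous proximal-gradient consensus step: mix with neighbors,
    descend on the smooth part, then apply the prox of the nonsmooth part."""
    mixed = W @ x
    grads = np.stack([g(x[i]) for i, g in enumerate(grad_fns)])
    return prox_l1(mixed - step * grads, step * l1_weight)
```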


Stochastic Variance Reduction for Nonconvex Optimization

We study nonconvex finite-sum problems and analyze stochastic variance reduced gradient (SVRG) methods for them. SVRG and related methods have recently surged into prominence for convex optimization given their edge over stochastic gradient descent (SGD); but their theoretical analysis almost exclusively assumes convexity. In contrast, we prove non-asymptotic rates of convergence (to stationary...
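
For reference, a bare-bones version of the SVRG update described above might look as follows; the signature is illustrative and omits the sampling and mini-batching choices the paper analyzes.

```python
import numpy as np

def svrg(grad_fns, x0, step=0.01, epochs=20, inner=None, seed=0):
    """SVRG sketch for min_x (1/n) * sum_i f_i(x).

    Each epoch computes a full gradient at a snapshot point; inner steps use
    the variance-reduced direction grad f_i(x) - grad f_i(snapshot) + full_grad.
    """
    rng = np.random.default_rng(seed)
    n = len(grad_fns)
    inner = inner if inner is not None else n
    x = x0.copy()
    for _ in range(epochs):
        snapshot = x.copy()
        full_grad = sum(g(snapshot) for g in grad_fns) / n
        for _ in range(inner):
            i = rng.integers(n)
            v = grad_fns[i](x) - grad_fns[i](snapshot) + full_grad
            x = x - step * v
    return x
```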


Decentralized Frank-Wolfe Algorithm for Convex and Nonconvex Problems

Decentralized optimization algorithms have received much attention due to the recent advances in network information processing. However, conventional decentralized algorithms based on projected gradient descent are incapable of handling high-dimensional constrained problems, as the projection step becomes computationally prohibitive. To address this problem, this paper adopts a proj...
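
The projection-free idea is easiest to see on a concrete constraint set. The sketch below shows a centralized Frank-Wolfe step over an l1 ball, where the linear minimization oracle reduces to picking one coordinate; the decentralized, communication-aware version proposed in the paper is not reproduced here.

```python
import numpy as np

def frank_wolfe_l1(grad_fn, x0, radius=1.0, iters=200):
    """Frank-Wolfe (projection-free) sketch over {x : ||x||_1 <= radius}."""
    x = x0.copy()
    for k in range(iters):
        g = grad_fn(x)
        # linear minimization oracle: argmin_{||s||_1 <= radius} <g, s>
        # is a signed, scaled coordinate vector -- no projection needed
        s = np.zeros_like(x)
        j = np.argmax(np.abs(g))
        s[j] = -radius * np.sign(g[j])
        gamma = 2.0 / (k + 2.0)        # standard diminishing step size
        x = (1.0 - gamma) * x + gamma * s
    return x
```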


Fast Incremental Method for Nonconvex Optimization

We analyze a fast incremental aggregated gradient method for optimizing nonconvex problems of the form min_x Σ_i f_i(x). Specifically, we analyze the SAGA algorithm within an Incremental First-order Oracle framework, and show that it converges to a stationary point provably faster than both gradient descent and stochastic gradient descent. We also discuss Polyak's special class of nonconvex pro...
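
A minimal, single-machine sketch of the SAGA update analyzed in that framework is given below; the table-of-gradients bookkeeping is the essential ingredient, and the names are illustrative.

```python
import numpy as np

def saga(grad_fns, x0, step=0.01, iters=1000, seed=0):
    """SAGA sketch for min_x sum_i f_i(x): store the most recent gradient of
    each component and use it to build a variance-reduced update."""
    rng = np.random.default_rng(seed)
    n = len(grad_fns)
    x = x0.copy()
    table = np.stack([g(x) for g in grad_fns])   # one stored gradient per f_i
    avg = table.mean(axis=0)
    for _ in range(iters):
        i = rng.integers(n)
        g_new = grad_fns[i](x)
        # variance-reduced direction: fresh gradient minus stale one plus average
        x = x - step * (g_new - table[i] + avg)
        avg = avg + (g_new - table[i]) / n       # keep the running mean in sync
        table[i] = g_new
    return x
```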


Parallel Asynchronous Stochastic Variance Reduction for Nonconvex Optimization

Nowadays, asynchronous parallel algorithms have received much attention in the optimization field due to the crucial demands of modern large-scale optimization problems. However, most asynchronous algorithms focus on convex problems; analysis for nonconvex problems is lacking. For the Asynchronous Stochastic Gradient Descent (ASGD) algorithm, the best result from (Lian et al., 2015) can only achieve an ...



Journal:
  • CoRR

Volume: abs/1608.05766

Pages: -

Publication date: 2016